Reviews: Learning brain regions via large-scale online structured sparse dictionary learning

Neural Information Processing Systems

Technical quality: I would rate this marginally below 3 if that option were available. The model definition was clear, and the algorithms followed a well-established framework [14] and appeared solid. However, perhaps undermined by its presentation, the paper did not clearly demonstrate the empirical advantage of introducing the Sobolev prior in the results section. In particular, Figure 2 did not make clear in what respect the SSMF method outperformed the two alternatives.


Matching domain experts by training from scratch on domain knowledge

Luo, Xiaoliang, Sun, Guangzhi, Love, Bradley C.

arXiv.org Artificial Intelligence

Recently, large language models (LLMs) have outperformed human experts in predicting the results of neuroscience experiments (Luo et al., 2024). What is the basis for this performance? One possibility is that statistical patterns in that specific scientific literature, as opposed to emergent reasoning abilities arising from broader training, underlie LLMs' performance. To evaluate this possibility, we trained a relatively small 124M-parameter GPT-2 model, via next-word prediction, on 1.3 billion tokens of domain-specific knowledge. Despite being orders of magnitude smaller than large LLMs trained on trillions of tokens, the small models achieved expert-level performance in predicting neuroscience results. Small models trained on the neuroscience literature succeeded both when trained from scratch using a tokenizer specifically trained on neuroscience text and when the neuroscience literature was used to finetune a pretrained GPT-2. Our results indicate that expert-level performance may be attained by even small LLMs through domain-specific, auto-regressive training.
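The training objective the abstract describes, next-word prediction, can be illustrated with a deliberately tiny stand-in. The sketch below trains a bigram counter on a toy "domain corpus" and predicts the most frequent continuation; it is illustrative only (the corpus, function names, and model are hypothetical, not the paper's GPT-2 setup), but it shows how an auto-regressive model absorbs domain statistics directly from text.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count next-token frequencies per token: the simplest possible
    auto-regressive 'next word prediction' model."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent continuation seen during training."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Toy 'domain corpus': statistical patterns in the text alone
# determine the model's predictions.
corpus = "dopamine neurons encode reward prediction error signals".split()
model = train_bigram(corpus)
print(predict_next(model, "reward"))  # -> prediction
```

A real LLM replaces the bigram counts with a neural network conditioned on the full preceding context, but the objective, predicting the next token of domain text, is the same.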


Towards Continual Reinforcement Learning: A Review and Perspectives

Khetarpal, Khimya | Riemer, Matthew (IBM Research, Mila, University of Montreal) | Rish, Irina | Precup, Doina

Journal of Artificial Intelligence Research

In this article, we aim to provide a literature review of different formulations and approaches to continual reinforcement learning (RL), also known as lifelong or non-stationary RL. We begin by discussing our perspective on why RL is a natural fit for studying continual learning. We then provide a taxonomy of different continual RL formulations by mathematically characterizing two key properties of non-stationarity, namely, its scope and its driver. This offers a unified view of various formulations. Next, we review and present a taxonomy of continual RL approaches. We go on to discuss the evaluation of continual RL agents, providing an overview of benchmarks used in the literature and important metrics for understanding agent performance. Finally, we highlight open problems and challenges in bridging the gap between the current state of continual RL and findings in neuroscience. While still in its early days, the study of continual RL promises to produce better incremental reinforcement learners that can function in increasingly realistic applications where non-stationarity plays a vital role, such as healthcare, education, logistics, and robotics.
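The non-stationarity the abstract characterizes can be made concrete with a toy environment. The sketch below (an illustrative example, not taken from the article) is a two-armed bandit whose reward probabilities drift over time: the scope of the non-stationarity is the reward function, and its driver is an exogenous random walk.

```python
import random

class DriftingBandit:
    """Toy non-stationary two-armed bandit. Each arm's reward
    probability random-walks a little at every step, so a learner
    must keep adapting rather than converge once and stop."""

    def __init__(self, probs=(0.9, 0.1), drift=0.001, seed=0):
        self.probs = list(probs)
        self.drift = drift
        self.rng = random.Random(seed)

    def step(self, arm):
        reward = 1 if self.rng.random() < self.probs[arm] else 0
        # Driver of non-stationarity: reward probabilities drift.
        for i in range(len(self.probs)):
            delta = self.rng.uniform(-self.drift, self.drift)
            self.probs[i] = min(1.0, max(0.0, self.probs[i] + delta))
        return reward

env = DriftingBandit()
total = sum(env.step(0) for _ in range(1000))
print(total)  # close to 900 while arm 0's probability stays near 0.9
```

With a larger drift rate, the initially better arm can cease to be best mid-run, which is exactly the kind of change continual RL agents are evaluated against.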